Optimizing Ontology Alignments through NSGA-II without Using Reference Alignment
Ontologies are widely used to solve data heterogeneity problems on the Semantic Web, but the available ontologies can themselves introduce heterogeneity. To reconcile these ontologies and achieve semantic interoperability, we need to find the relationships among the entities in different ontologies; the process of identifying them is called ontology alignment. All existing matching systems that use evolutionary approaches to optimize their parameters require a reference alignment between the two ontologies to be given in advance, which can be very expensive to obtain, especially when the ontologies are large. To address this issue, this paper proposes a novel approach that uses NSGA-II to optimize ontology alignments without a reference alignment. In our approach, an adaptive aggregation strategy is presented to improve the efficiency of the optimizing process, and two approximate evaluation measures, namely match coverage and match ratio, are introduced to replace the classic recall and precision on the reference alignment when evaluating the quality of alignments. Experimental results show that our approach is effective: it finds solutions that are very close to those obtained by approaches using a reference alignment, and the quality of its alignments is in general better than that of state-of-the-art ontology matching systems such as GOAL and SAMBO.
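As a rough illustration (not taken from the paper, whose exact definitions may differ), reference-free measures like match coverage and match ratio could be computed from an alignment treated as a set of entity pairs:

```python
def match_coverage(alignment, entities1, entities2):
    # Hypothetical formulation: fraction of all entities in both ontologies
    # that are covered by at least one correspondence.
    matched = {a for a, _ in alignment} | {b for _, b in alignment}
    total = len(entities1) + len(entities2)
    return len(matched) / total if total else 0.0

def match_ratio(alignment, entities1, entities2):
    # Hypothetical formulation: number of correspondences relative to the
    # size of the smaller ontology (the maximum possible 1:1 matches).
    denom = min(len(entities1), len(entities2))
    return len(alignment) / denom if denom else 0.0

entities1 = {"A", "B", "C"}
entities2 = {"x", "y"}
alignment = {("A", "x"), ("B", "y")}
cov = match_coverage(alignment, entities1, entities2)   # 4 of 5 entities matched
ratio = match_ratio(alignment, entities1, entities2)    # 2 correspondences / 2
```

Such surrogate measures reward alignments that cover many entities without over-matching, standing in for recall and precision when no reference alignment exists.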
A Novel Self-Supervised Learning-Based Anomaly Node Detection Method Based on an Autoencoder in Wireless Sensor Networks
Because existing wireless sensor network (WSN) anomaly detection methods consider and analyze only temporal features, this paper designs a self-supervised, autoencoder-based anomaly node detection method. The method integrates temporal WSN data flow feature extraction, spatial position feature extraction, and intermodal WSN correlation feature extraction into the design of the autoencoder to make full use of the spatial and temporal information of the WSN for anomaly detection. First, a fully connected network is used to extract the temporal features of nodes by considering a single mode from a local spatial perspective. Second, a graph neural network (GNN) is used to introduce the WSN topology from a global spatial perspective and to extract the spatial and temporal features of the data flows of nodes and their neighbors, again within a single mode. Then, an adaptive fusion method based on weighted summation is used to extract the correlation features between different modes. In addition, the paper introduces a gated recurrent unit (GRU) to address the long-term dependence problem in the time dimension. Finally, the reconstructed output of the decoder and the hidden-layer representation of the autoencoder are fed into a fully connected network to calculate the anomaly probability of the current system. Because spatial feature extraction is performed up front, the designed method can be applied to large-scale network anomaly detection by adding a clustering operation. Experiments show that the designed method outperforms the baselines: its F1 score reaches 90.6%, which is 5.2% higher than those of existing anomaly detection methods based on unsupervised reconstruction and prediction. Code and model are available at https://github.com/GuetYe/anomaly_detection/GLS
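A minimal sketch of two of the ingredients described above: adaptive weighted-sum fusion of per-mode features (with softmax-normalized gate scores standing in for learned weights, an assumption on our part) and a reconstruction-error-based anomaly probability head. This is an illustration, not the authors' implementation:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_features(modal_feats, gate_scores):
    # Adaptive weighted-sum fusion: combine one feature vector per mode
    # using softmax-normalized gate scores (hypothetical stand-ins for
    # learned gating parameters).
    w = softmax(gate_scores)
    dim = len(modal_feats[0])
    return [sum(w[k] * modal_feats[k][i] for k in range(len(modal_feats)))
            for i in range(dim)]

def anomaly_probability(x, x_recon):
    # Map the mean squared reconstruction error to (0, 1) with a sigmoid
    # (assumed scoring head; the paper uses a fully connected network).
    err = sum((a - b) ** 2 for a, b in zip(x, x_recon)) / len(x)
    return 1.0 / (1.0 + math.exp(-err))
```

With equal gate scores the fusion reduces to a plain average; a perfect reconstruction yields the sigmoid midpoint of 0.5, and larger errors push the score toward 1.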
DHRL-FNMR: An Intelligent Multicast Routing Approach Based on Deep Hierarchical Reinforcement Learning in SDN
The optimal multicast tree problem in Software-Defined Networking (SDN) multicast routing is an NP-hard combinatorial optimization problem. Although existing SDN intelligent solution methods based on deep reinforcement learning can dynamically adapt to complex network link state changes, they are plagued by problems such as redundant branches, large action spaces, and slow agent convergence. In this paper, an SDN intelligent multicast routing algorithm based on deep hierarchical reinforcement learning is proposed to circumvent these problems. First, the multicast tree construction problem is decomposed into two sub-problems: selecting the fork nodes and constructing the optimal path from each fork node to its destination node. Second, based on the information characteristics of SDN global network perception, the multicast tree state matrix, link bandwidth matrix, link delay matrix, link packet loss rate matrix, and sub-goal matrix are designed as the state space of the intrinsic and meta controllers. Then, to mitigate the excessive action space, our approach constructs different action spaces at the upper and lower levels: the meta-controller uses the network nodes as its action space to select the fork node, and the intrinsic controller uses the adjacent edges of the current node as its action space, thus implementing four different action selection strategies in the construction of the multicast tree. To help the agent construct the optimal multicast tree faster, we developed alternative reward strategies that distinguish between single-step node actions and multi-step actions towards multiple destination nodes.
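The two-level action-space split described above can be sketched as follows, assuming the topology is given as an adjacency mapping (the function names and representation here are illustrative, not from the paper):

```python
def meta_actions(nodes):
    # Upper level: the meta-controller selects a fork node, so its action
    # space is the set of all network nodes.
    return list(nodes)

def intrinsic_actions(adjacency, current):
    # Lower level: the intrinsic controller extends the path one hop at a
    # time, so its action space is only the edges adjacent to the current
    # node -- much smaller than the full edge set.
    return [(current, nbr) for nbr in adjacency.get(current, [])]

adjacency = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
upper = meta_actions(adjacency.keys())        # choose a fork node
lower = intrinsic_actions(adjacency, "a")     # choose the next edge from "a"
```

Restricting the lower level to adjacent edges is what keeps the per-step action space small even on large topologies.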
Robust zero watermarking algorithm for medical images based on Zernike-DCT
Digital medical systems not only facilitate the storage and transmission of medical information but also bring information security problems. Aiming at the security of medical images, a robust zero-watermarking algorithm for medical images based on Zernike-DCT is proposed. The algorithm first uses a chaotic logistic sequence to preprocess and encrypt the watermark, then performs edge detection and Zernike moment processing on the original medical image to obtain accurate edge points, and then applies the discrete cosine transform (DCT) to them to obtain the feature vector. Finally, it combines perceptual hashing and zero-watermarking technology to generate the key that completes watermark embedding and extraction. The algorithm has good robustness to conventional and geometric attacks, strong anti-noise ability, high positioning accuracy, and high processing efficiency, and it is superior to the classical edge detection algorithm in extraction effect. It is a stable and reliable image edge detection algorithm.
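The defining property of zero watermarking is that the image is never modified: the "key" is derived by combining the image's feature hash with the encrypted watermark. A minimal XOR-based sketch of that idea (the XOR construction is a common zero-watermarking scheme; the paper's exact combination may differ):

```python
def xor_bits(a, b):
    # Bitwise XOR of two equal-length bit lists.
    return [x ^ y for x, y in zip(a, b)]

def generate_key(feature_hash, encrypted_watermark):
    # Zero-watermark "embedding": the key is the XOR of the image's
    # perceptual-hash feature vector and the encrypted watermark.
    # Nothing is written into the image itself.
    return xor_bits(feature_hash, encrypted_watermark)

def extract_watermark(feature_hash, key):
    # Extraction: XOR the stored key with the (possibly attacked) image's
    # recomputed feature hash; if the hash is robust to the attack, the
    # watermark is recovered exactly, since (h ^ w) ^ h == w.
    return xor_bits(feature_hash, key)
```

Robustness therefore rests entirely on the feature hash (here, the Zernike-DCT perceptual hash) staying stable under conventional and geometric attacks.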
Robust and secure zero-watermarking algorithm for medical images based on Harris-SURF-DCT and chaotic map
To protect patient information in medical images, this article proposes a robust watermarking algorithm for medical images based on Harris-SURF-DCT. First, the corners of the medical image are extracted using the Harris corner detection algorithm, and the extracted corners are then described using the feature-point description method of the SURF algorithm to generate the feature descriptor matrix. The feature descriptor matrix is processed by a perceptual hash algorithm to obtain the feature vector of the medical image, a binary feature vector with a size of 32 bits. Second, to enhance the security of the watermark information, the logistic map algorithm is used to encrypt the watermark before it is embedded. Finally, with the help of cryptographic knowledge, a third party, and zero-watermarking technology, the algorithm can embed the watermark without modifying the medical image. When extracting the watermark, the algorithm does not require the original image. In addition, the algorithm has strong robustness to conventional attacks and geometric attacks, and it performs especially well under geometric attacks.
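Logistic-map encryption of a watermark, as mentioned above, is typically done by iterating the chaotic map and XOR-ing a thresholded bit stream with the watermark bits. A self-contained sketch under those common conventions (the seed, control parameter, and threshold are illustrative choices, not values from the paper):

```python
def logistic_keystream(x0, n, mu=3.99):
    # Iterate the logistic map x_{k+1} = mu * x_k * (1 - x_k) in the chaotic
    # regime (mu near 4) and threshold each state at 0.5 to produce bits.
    bits, x = [], x0
    for _ in range(n):
        x = mu * x * (1 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits

def encrypt(watermark_bits, x0, mu=3.99):
    # XOR the watermark with the chaotic keystream. Because XOR is its own
    # inverse, calling encrypt() again with the same seed decrypts.
    ks = logistic_keystream(x0, len(watermark_bits), mu)
    return [w ^ k for w, k in zip(watermark_bits, ks)]
```

The seed x0 (and mu) act as the secret key: without them, the keystream, and hence the watermark, cannot be reproduced.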
Machine Learning Approach for Prediction of Lateral Confinement Coefficient of CFRP-Wrapped RC Columns
Materials play a significant role in creating structures that are durable, valuable, and possess symmetric engineering properties. Premium-quality materials establish an exemplary environment for every situation. Among the composite materials used in construction, carbon fiber reinforced polymer (CFRP) is one of the best, providing symmetric, superior strength and stiffness to reinforced concrete structures. Confinement is needed where a structure is jeopardized by seismic loads and axial forces, specifically in columns, since a limited proportion of ties or stirrups leads to lower ductility and greater brittleness. The failure and buckling of columns with CFRP has been studied by many researchers, and work is ongoing to determine how columns can be retrofitted. This article symmetrically integrates two disciplines, namely materials (CFRP) and computer application (machine learning). Technically, predicting the lateral confinement coefficient (Ks) of reinforced concrete columns plays a vital role in design. Therefore, machine learning models such as genetic programming (GP), minimax probability machine regression (MPMR), and deep neural networks (DNN) were utilized to determine the Ks value of CFRP-wrapped RC columns. To compute the Ks value, parameters such as column width, length, corner radius, CFRP thickness, compressive strength of the unconfined concrete, and elastic modulus of CFRP act as inputs. The adopted machine learning models utilized 293 datasets of square and rectangular RC columns for the prediction of Ks. Among the developed models, GP and MPMR provide encouraging performance with higher R values of 0.943 and 0.941; however, the statistical indices show that the GP model outperforms the other models with better precision (R2 = 0.89) and smaller errors (RMSE = 0.056 and NMBE = 0.001). Based on the evaluation of the statistical indices, a rank analysis was carried out, in which the GP model secured the most points and ranked at the top.
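The statistical indices quoted above (R2, RMSE, NMBE) have standard definitions that can be computed directly; the sketch below uses one common formulation of NMBE (bias normalized by the mean of the observations), which may differ in detail from the paper's:

```python
import math

def r2(obs, pred):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def rmse(obs, pred):
    # Root mean squared error.
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def nmbe(obs, pred):
    # Normalized mean bias error: average signed error divided by the mean
    # observation (one common definition; formulations vary).
    mean = sum(obs) / len(obs)
    return sum(p - o for o, p in zip(obs, pred)) / (len(obs) * mean)
```

A perfect model gives R2 = 1, RMSE = 0, and NMBE = 0; a nonzero NMBE reveals systematic over- or under-prediction that RMSE alone would hide.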
Matching Biomedical Ontologies through Adaptive Multi-Modal Multi-Objective Evolutionary Algorithm
To integrate massive amounts of heterogeneous biomedical data in biomedical ontologies and to provide more options for clinical diagnosis, this work proposes an adaptive Multi-modal Multi-Objective Evolutionary Algorithm (aMMOEA) to match two heterogeneous biomedical ontologies by finding semantically identical concepts. In particular, we first propose two evaluation metrics for the alignment’s quality, which measure the alignment’s statistical and logical features, i.e., its f-measure and its conservativity. On this basis, we build a novel multi-objective optimization model for the biomedical ontology matching problem. By analyzing the essence of this problem, we point out that it is a large-scale Multi-modal Multi-objective Optimization Problem (MMOP) with sparse Pareto optimal solutions. We then propose a problem-specific aMMOEA to solve it, which uses a Guiding Matrix (GM) to adaptively steer the algorithm’s convergence and diversity in both the objective and decision spaces. The experiment uses the Ontology Alignment Evaluation Initiative (OAEI)’s biomedical tracks to test aMMOEA’s performance, and comparisons with two state-of-the-art MOEA-based matching techniques and OAEI’s participants show that aMMOEA is able to effectively determine diverse solutions for decision makers.
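At the core of any multi-objective model like the one above is Pareto dominance over the objective vectors (here, f-measure and conservativity). A generic sketch, assuming both objectives are to be maximized (the paper's exact objective orientations are an assumption here):

```python
def dominates(a, b):
    # a Pareto-dominates b when a is no worse on every objective and
    # strictly better on at least one (both objectives maximized).
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(points):
    # Keep every point that no other point dominates.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

Returning the whole front, rather than a single aggregated optimum, is what lets the algorithm hand decision makers a diverse set of trade-off alignments.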
Optimizing Sensor Ontology Alignment through Compact co-Firefly Algorithm
The Semantic Sensor Web (SSW) links semantic web techniques with sensor networks, using sensor ontologies to describe sensor information. Annotating sensor data with different sensor ontologies can help implement the interoperability of different sensor systems, which requires that the sensor ontologies themselves be interoperable. It is therefore necessary to match the sensor ontologies by establishing meaningful links between semantically related sensor information. Since Swarm Intelligence Algorithms (SIAs) represent a good methodology for addressing the ontology matching problem, we investigate a popular SIA, the Firefly Algorithm (FA), to optimize the ontology alignment. To reduce memory consumption and better trade off the algorithm’s exploitation and exploration, this work proposes a general-purpose ontology matching technique based on the Compact co-Firefly Algorithm (CcFA), which combines a compact encoding mechanism with a co-evolutionary mechanism. Our proposal uses Gray code to encode the solutions, two compact operators to implement the exploiting and exploring strategies, respectively, and two Probability Vectors (PVs) to represent the swarms that focus on exploitation and exploration, respectively. Through communication between the two swarms in each generation, CcFA is able to efficiently improve search efficiency when addressing the sensor ontology matching problem. The experiment uses the Conference track and three pairs of real sensor ontologies to test our proposal’s performance. The statistical results show that the CcFA-based ontology matching technique can effectively match both sensor ontologies and other general ontologies in the domain of conference organization.
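The compact mechanism mentioned above replaces an explicit population with a probability vector (PV) that is sampled and then nudged toward better solutions, which is where the memory saving comes from. A minimal compact-EA-style sketch (the sampling/update scheme below is the generic one; step size and details are illustrative, not the paper's):

```python
import random

def sample_bits(pv, rng):
    # Sample one binary (e.g., Gray-coded) solution from the probability
    # vector: bit i is 1 with probability pv[i].
    return [1 if rng.random() < p else 0 for p in pv]

def update_pv(pv, winner, loser, step=0.05):
    # Compact-EA update: shift each probability toward the winner's bit and
    # away from the loser's, clamped to [0, 1]. Only the PV is stored, so
    # memory stays O(solution length) instead of O(population size).
    return [min(1.0, max(0.0, p + step * (w - l)))
            for p, w, l in zip(pv, winner, loser)]
```

In a co-evolutionary setup such as CcFA, one PV would use an exploitative update and the other an explorative one, with the two swarms exchanging information each generation.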
Solving Ontology Metamatching Problem through Improved Multiobjective Particle Swarm Optimization Algorithm
In recent years, knowledge representation in the Artificial Intelligence (AI) domain has helped people understand the semantics of data and has improved the interoperability between diverse knowledge-based applications. The Semantic Web (SW), one method of knowledge representation, is the new generation of the World Wide Web (WWW); it integrates AI with web techniques and is dedicated to implementing automatic cooperation among different intelligent applications. Ontology, an information exchange model that defines concepts and formally describes the relationships between them, is the core technique of the SW, implementing semantic information sharing and data interoperability in the Internet of Things (IoT) domain. However, the heterogeneity issue hampers communication among different ontologies and prevents cooperation among ontology-based intelligent applications. To solve this problem, it is vital to establish semantic relationships between heterogeneous ontologies, the process known as ontology matching. The ontology metamatching problem is commonly a complex optimization problem with many local optima. To this end, this work defines the ontology metamatching problem as a multiobjective optimization model and proposes a multiobjective particle swarm optimization algorithm with a diversity-enhancing strategy (MOPSO-DE) to better trade off the convergence and diversity of the population. The well-known Ontology Alignment Evaluation Initiative (OAEI) benchmark is used in the experiment to test MOPSO-DE’s performance. Experimental results prove that MOPSO-DE can obtain high-quality alignments while reducing MOPSO’s memory consumption.
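For reference, the canonical particle swarm update that MOPSO variants build on looks as follows; the paper's diversity-enhancing strategy would add a perturbation on top of this basic step (the coefficients below are conventional defaults, not the paper's settings):

```python
import random

def pso_step(x, v, pbest, gbest, rng, w=0.7, c1=1.5, c2=1.5):
    # Canonical PSO update: inertia term plus cognitive (personal-best) and
    # social (global-best) attraction, each scaled by a random factor.
    new_v = [w * vi
             + c1 * rng.random() * (pb - xi)
             + c2 * rng.random() * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

When every particle has collapsed onto the same point, this update produces no movement, which is exactly the stagnation a diversity-enhancing strategy is designed to break out of.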